Tags: llm* + retrieval-augmented generation*


  1. A post on new techniques for parsing and searching PDFs by converting them into a hierarchical structure for RAG. Chunks are generated dynamically at query time, and headers and sub-headers are sent to the language model alongside the relevant chunks; a minimal sketch follows below.
    2024-06-27 by klotz
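A minimal sketch of the header-plus-chunk context assembly described in item 1; the Chunk class, build_context helper, and retriever are hypothetical illustrations, not the post's actual code.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    header_path: list[str]  # e.g. ["2. Methods", "2.1 Parsing"] from the PDF hierarchy
    text: str

def build_context(chunks: list[Chunk]) -> str:
    """Prepend each chunk's header / sub-header trail before sending it to the LLM."""
    parts = []
    for chunk in chunks:
        breadcrumb = " > ".join(chunk.header_path)
        parts.append(f"[{breadcrumb}]\n{chunk.text}")
    return "\n\n".join(parts)

# retrieved = retriever.search(query)   # hypothetical retriever over the hierarchy
# prompt = f"Context:\n{build_context(retrieved)}\n\nQuestion: {query}"
```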
  2. The llmsherpa project provides APIs to accelerate Large Language Model (LLM) projects, including LayoutPDFReader for layout-aware PDF parsing, smart chunking for vector search and Retrieval-Augmented Generation, and table analysis. It is open source under the Apache 2.0 license; a usage sketch follows below.
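A short sketch based on the llmsherpa README; the parser URL and PDF path are placeholders, and the exact API may differ across versions.

```python
from llmsherpa.readers import LayoutPDFReader

llmsherpa_api_url = "https://readers.llmsherpa.com/api/document/developer/parseDocument?renderFormat=all"
pdf_reader = LayoutPDFReader(llmsherpa_api_url)
doc = pdf_reader.read_pdf("example.pdf")  # local path or URL

# Layout-aware "smart" chunks, ready for embedding / RAG retrieval
for chunk in doc.chunks():
    print(chunk.to_context_text())
```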
  3. A collection of RAG techniques to help you develop a RAG application that is robust enough to last.
    2024-06-26 by klotz
  4. The article proposes LongRAG, a framework that improves Retrieval-Augmented Generation (RAG) by pairing a long retriever with a long reader. LongRAG groups Wikipedia into larger 4K-token units, shrinking the corpus from 22M to 600K units and easing the burden on the retriever. The top-k retrieved units (≈30K tokens) are then fed to a long-context language model for zero-shot answer extraction. LongRAG reaches an exact-match score of 62.7% on NQ and 64.3% on HotpotQA (full-wiki), on par with the state of the art; an illustrative sketch follows below.
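An illustrative sketch of the LongRAG idea, with hypothetical helpers rather than the paper's code: merge passages into ~4K-token retrieval units, retrieve the top-k units (~30K tokens total), and let a long-context LLM extract the answer zero-shot.

```python
def group_into_units(passages, max_tokens=4096, count_tokens=len):
    """Merge consecutive passages into larger retrieval units.

    count_tokens defaults to character count; swap in a real tokenizer length.
    """
    units, current, size = [], [], 0
    for passage in passages:
        if current and size + count_tokens(passage) > max_tokens:
            units.append(" ".join(current))
            current, size = [], 0
        current.append(passage)
        size += count_tokens(passage)
    if current:
        units.append(" ".join(current))
    return units

# units = group_into_units(wikipedia_passages)      # ~600K units instead of 22M passages
# top_units = retriever.search(question, k=8)       # hypothetical retriever over the units
# context = "\n\n".join(top_units)                  # roughly 30K tokens of context
# answer = long_context_llm(f"{context}\n\nQuestion: {question}\nAnswer:")
```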
  5. Learn about the LLM Knowledge Graph Builder, an online tool that uses machine learning models to transform unstructured data into a knowledge graph. This tool is integrated with a Retrieval-Augmented Generation (RAG) chatbot and is part of Neo4j's GraphRAG Ecosystem Tools.
    2024-06-23 by klotz
  6. This article explains Retrieval-Augmented Generation (RAG), a method for reducing hallucinations in Large Language Models (LLMs) by constraining them to answer from retrieved context. RAG is demonstrated with txtai, an open-source embeddings database for semantic search, LLM orchestration, and language model workflows; a minimal sketch follows below.
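A minimal RAG sketch with txtai along the lines of item 6; the embedding and LLM model names are examples, and the calls should be checked against your txtai version.

```python
from txtai import Embeddings, LLM

# Semantic index over a few example documents
embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)
embeddings.index([
    "RAG grounds LLM answers in retrieved context",
    "txtai is an embeddings database for semantic search",
])

question = "How does RAG reduce hallucinations?"
# Retrieve the top matches and constrain the LLM to that context
context = "\n".join(result["text"] for result in embeddings.search(question, 3))

llm = LLM("TheBloke/Mistral-7B-OpenOrca-AWQ")  # example model
print(llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```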
  7. This article walks through building a local RAG (Retrieval-Augmented Generation) system using Llama 3, Ollama for model management, and LlamaIndex as the RAG framework, getting a basic local setup running in just a few lines of code; a sketch follows below.
    2024-06-21 by klotz
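A sketch of a local RAG pipeline in the spirit of item 7; the integration packages (llama-index-llms-ollama, llama-index-embeddings-huggingface), model names, and data folder are assumptions to verify against the article.

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Point LlamaIndex at a locally served Llama 3 (via Ollama) and a local embedder
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load a local folder of files and build a vector index over them
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What does this document say about RAG?"))
```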
  8. Learn how to prompt Command R: understand the structured prompts used for RAG, how to format chat history and tool outputs, and how to adapt sections of the prompt for different tasks; an SDK-based sketch follows below.
    2024-06-19 by klotz
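The article covers hand-building Command R's structured RAG prompt; as a related sketch, the Cohere SDK can accept documents and chat history and render that structured prompt for you. Parameter shapes and the API key below are assumptions; check the Cohere documentation.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key
response = co.chat(
    model="command-r",
    message="Summarize what the retrieved notes say about chunking.",
    chat_history=[{"role": "USER", "message": "We were discussing RAG chunking."}],
    documents=[
        {"title": "notes.md", "snippet": "Smaller chunks improve retrieval precision."},
    ],
)
print(response.text)
```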
  9. A CLI tool for interacting with local or remote LLMs to retrieve information about files, execute queries, and perform other tasks in a Retrieval-Augmented Generation (RAG) fashion.
    2024-06-21 by klotz
  10. LlamaIndex comes with built-in indexing that lets developers index large datasets efficiently, making it easier to search and retrieve information from them and improving the overall performance of LLM-based applications; a sketch with persistence follows after this entry.
    2024-06-18 by klotz
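A sketch of LlamaIndex's built-in indexing with persistence, so a large dataset is embedded once and reloaded afterwards; directory names are placeholders, and it assumes an LLM and embedding model are configured via Settings (the library defaults to OpenAI otherwise).

```python
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# Build the index once over a dataset and persist it to disk
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="storage")

# Later: reload the index without re-embedding, then query it
storage_context = StorageContext.from_defaults(persist_dir="storage")
index = load_index_from_storage(storage_context)
print(index.as_query_engine().query("Which documents mention retrieval?"))
```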
